Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls
Authors
Abstract
We propose a rank-k variant of the classical Frank-Wolfe algorithm to solve convex optimization problems over a trace-norm ball. Our algorithm replaces the top singular-vector computation (1-SVD) in Frank-Wolfe with a top-k singular-vector computation (k-SVD), which can be done by applying 1-SVD k times. Our algorithm has a linear convergence rate when the objective function is smooth and strongly convex, and the optimal solution has rank at most k. This improves the convergence rate and the total complexity of the Frank-Wolfe method and its variants.
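As a rough illustration of the idea, here is a minimal Python sketch of Frank-Wolfe over the trace-norm ball, with a hypothetical rank-k variant in which the rank-1 linear minimization oracle (1-SVD) is swapped for a truncated k-SVD of the gradient. The objective, step size, and the way the rank-k atom is formed are illustrative assumptions, not the paper's exact update rule.

```python
# Minimal sketch, assuming a generic smooth objective supplied via `grad_f`;
# the rank-k atom construction and the step size are illustrative choices,
# not the paper's exact rank-k update.
import numpy as np
from scipy.sparse.linalg import svds

def fw_trace_ball(grad_f, X0, tau, k=1, n_iters=200):
    """Frank-Wolfe over the trace-norm ball {X : ||X||_* <= tau}.

    grad_f : callable returning the gradient matrix at X
    k      : number of singular vectors used per step (k = 1 is classical FW)
    """
    X = X0.copy()
    for t in range(n_iters):
        G = grad_f(X)
        U, s, Vt = svds(-G, k=k)                   # k-SVD (k = 1 is the usual 1-SVD oracle)
        if k == 1:
            S = tau * np.outer(U[:, 0], Vt[0, :])  # classical rank-1 LMO atom
        else:
            w = tau * s / s.sum()                  # spread the trace-norm budget over
            S = (U * w) @ Vt                       # the top-k directions (illustrative)
        gamma = 2.0 / (t + 2)                      # standard FW step size
        X = (1 - gamma) * X + gamma * S
    return X

# Toy usage on f(X) = 0.5 * ||X - M||_F^2:
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 20))
X_hat = fw_trace_ball(lambda X: X - M, np.zeros((30, 20)), tau=5.0, k=3)
```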
Related works
A Distributed Frank-Wolfe Framework for Learning Low-Rank Matrices with the Trace Norm
We consider the problem of learning a high-dimensional but low-rank matrix from a large-scale dataset distributed over several machines, where low-rankness is enforced by a convex trace norm constraint. We propose DFW-Trace, a distributed Frank-Wolfe algorithm which leverages the low-rank structure of its updates to achieve efficiency in time, memory and communication usage. The step at the hea...
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth optimization has regained much interest in recent years in the context of large-scale optimization and machine learning. A key advantage of the method is that it avoids projections, the computational bottleneck in many applications, replacing them with a linear optimization step. Despite this advantage, the known convergence ra...
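To make the projection-free point concrete, the toy snippet below (not taken from the paper) shows a linear minimization oracle over a Euclidean ball, one standard example of a strongly convex set, and the corresponding Frank-Wolfe update that needs no projection.

```python
# Toy illustration, assuming the feasible set is a Euclidean ball of radius r
# (a standard example of a strongly convex set).
import numpy as np

def lmo_l2_ball(grad, r):
    """Linear minimization oracle: argmin over ||s||_2 <= r of <grad, s> = -r * grad / ||grad||_2."""
    return -r * grad / np.linalg.norm(grad)

def fw_step(x, grad, r, gamma):
    """One Frank-Wolfe update: a convex combination with the LMO vertex; no projection is needed."""
    return (1 - gamma) * x + gamma * lmo_l2_ball(grad, r)
```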
On the Global Linear Convergence of Frank-Wolfe Optimization Variants
The Frank-Wolfe (FW) optimization algorithm has lately regained popularity thanks in particular to its ability to nicely handle the structured constraints appearing in machine learning applications. However, its convergence rate is known to be slow (sublinear) when the solution lies at the boundary. A simple less-known fix is to add the possibility to take ‘away steps’ during optimization, an o...
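The away-step idea can be sketched as follows; this is written from the variant's standard textbook description, with the probability simplex and a constant step size as simplifying assumptions rather than the paper's setting.

```python
# Sketch of away-steps Frank-Wolfe on the probability simplex (vertices e_i);
# the constant step size is a placeholder, and the simplex is chosen only to
# keep the example short.
import numpy as np

def away_steps_fw_simplex(grad_f, n, n_iters=500, gamma0=0.1):
    x = np.ones(n) / n
    weights = {i: 1.0 / n for i in range(n)}       # convex-combination weights over active vertices
    for _ in range(n_iters):
        g = grad_f(x)
        s = int(np.argmin(g))                      # FW vertex: best descent vertex
        a = max(weights, key=lambda i: g[i])       # away vertex: worst active vertex
        d_fw = np.eye(n)[s] - x                    # move toward the FW vertex
        d_away = x - np.eye(n)[a]                  # move away from the away vertex
        if g @ d_fw <= g @ d_away:                 # pick the steeper descent direction
            d, gamma_max, take_fw = d_fw, 1.0, True
        else:
            alpha = weights[a]
            gamma_max = alpha / (1.0 - alpha) if alpha < 1.0 else 1.0
            d, take_fw = d_away, False
        gamma = min(gamma_max, gamma0)             # placeholder; exact line search is standard
        x = x + gamma * d
        if take_fw:                                # re-weight the active vertex set
            for i in weights: weights[i] *= (1 - gamma)
            weights[s] = weights.get(s, 0.0) + gamma
        else:
            for i in weights: weights[i] *= (1 + gamma)
            weights[a] -= gamma
        weights = {i: w for i, w in weights.items() if w > 1e-12}
    return x
```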
Efficient k-Support-Norm Regularized Minimization via Fully Corrective Frank-Wolfe Method
The k-support-norm regularized minimization has recently been applied with success to sparse prediction problems. The proximal gradient method is conventionally used to minimize this composite model. However, it tends to suffer from an expensive iteration cost, so solving the model can be time-consuming. In our work, we reformulate the k-support-norm regularized formulation into a constrained fo...
Frank-Wolfe Algorithms for Saddle Point Problems
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained smooth convex-concave saddle point (SP) problems. Remarkably, the method only requires access to linear minimization oracles. Leveraging recent advances in FW optimization, we provide the first proof of convergence of a FW-type saddle point solver over polytopes, thereby partially answering a 30-year-old conjecture. We a...
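The following toy sketch conveys the general flavor only; it is not the paper's algorithm or analysis. It runs simultaneous FW-style updates for the bilinear saddle point min_x max_y x^T A y over two probability simplices, using nothing but linear minimization oracles on each block; whether and when such schemes converge is precisely the kind of question the paper studies.

```python
# Toy sketch, assuming the bilinear matrix-game objective x^T A y over two
# probability simplices; convergence of this naive simultaneous scheme is not
# claimed here.
import numpy as np

def saddle_fw_bilinear(A, n_iters=2000):
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for t in range(n_iters):
        gx = A @ y                           # gradient of x^T A y with respect to x
        gy = A.T @ x                         # gradient with respect to y (maximized)
        s_x = np.eye(m)[np.argmin(gx)]       # LMO over the x-simplex (minimization)
        s_y = np.eye(n)[np.argmax(gy)]       # LMO over the y-simplex (maximization)
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * s_x
        y = (1 - gamma) * y + gamma * s_y
        x_avg += x; y_avg += y
    return x_avg / n_iters, y_avg / n_iters  # averaged iterates
```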